
    Averting Robot Eyes

    Home robots will cause privacy harms. At the same time, they can provide beneficial services—as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating which technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.


    Group trust dynamics during a risky driving experience in a Tesla Model X

    Growing concern about the risk and safety of autonomous vehicles (AVs) has made it vital to understand driver trust and behavior when operating AVs. While research has uncovered human factors and design issues based on individual driver performance, there remains a lack of insight into how trust in automation evolves in groups of people who face risk and uncertainty while traveling in AVs. To this end, we conducted a naturalistic experiment with groups of participants who were encouraged to engage in conversation while riding a Tesla Model X on campus roads. Our methodology was uniquely suited to uncover these issues through naturalistic interaction by groups in the face of a risky driving context. Conversations were analyzed, revealing several themes pertaining to trust in automation: (1) collective risk perception, (2) experimenting with automation, (3) group sense-making, (4) human-automation interaction issues, and (5) benefits of automation. Our findings highlight the untested and experimental nature of AVs and confirm serious concerns about the safety and readiness of this technology for on-road use. The process of determining appropriate trust and reliance in AVs will therefore be essential for drivers and passengers to ensure the safe use of this experimental and continuously changing technology. By revealing insights into social group–vehicle interaction, our results speak to the potential dangers and ethical challenges of AVs, as well as provide theoretical insights on group trust processes with advanced technology.


    Measuring fiscal disparities across the U.S. states: a representative revenue system/representative expenditure system approach, fiscal year 2002

    States and their local governments vary both in their needs to provide basic public services and in their abilities to raise revenues to pay for those services. A joint study by the Tax Policy Center and the New England Policy Center at the Federal Reserve Bank of Boston uses the Representative Revenue System (RRS) and the Representative Expenditure System (RES) frameworks to quantify these disparities across states by comparing each state's revenue capacity, revenue effort, and necessary expenditures to the average capacity, effort, and need in states across the country for fiscal year 2002.

    The fiscal capacity of a state is the state's revenue capacity relative to its expenditure need. A state with low fiscal capacity has a relatively small revenue base, a relatively high need for expenditures, or—as is often the case—a combination of both.

    The New England and Mid-Atlantic states tend to have high revenue capacity and low expenditure needs compared to the national average. Thus, states in these two regions tend to have high fiscal capacity, or a relatively high capability to cover their expenditure needs using their own resources. South Central states, on the other hand, have low fiscal capacity—that is, a low level of revenue-raising capacity given what it would cost to provide a standard set of public services to their citizens.

    Little relation exists between the amount of federal aid received by states and their fiscal capacity; federal money is not primarily distributed to offset differences in the ability to raise revenues or provide services. Given the current level of federal funds allocated to state and local governments, 91 percent of the gap between revenue capacity and expenditure need across the states could be covered if federal funds were reallocated.

    Keywords: Local government; Local finance; State finance; Taxation

    Working paper

    Privacy is crucial for healthy relationships, but robots will impact our privacy in new ways—this warrants a new area of research. This paper presents work from the first workshop on privacy-sensitive robotics. We identify seven research themes that should comprise privacy-sensitive robotics research in the near future: data privacy; manipulation and deception; trust; blame and transparency; legal issues; domains with special privacy concerns; and privacy theory. We intend for the research directions proposed for each of these themes to serve as a roadmap for privacy-sensitive robotics research.